
MON-4029: Add KubeStateMetricsConfig to ClusterMonitoring API#2778

Open
danielmellado wants to merge 1 commit into openshift:master from danielmellado:mon_4029_kube_state_metrics

Conversation

@danielmellado
Contributor

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado dmellado@redhat.com

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 24, 2026
@openshift-ci-robot

openshift-ci-robot commented Mar 24, 2026

@danielmellado: This pull request references MON-4029 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.


In response to this:

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado dmellado@redhat.com

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci Bot commented Mar 24, 2026

Hello @danielmellado! Some important instructions when contributing to openshift/api:
API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.

@openshift-ci openshift-ci Bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Mar 24, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Mar 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign deads2k for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@danielmellado
Contributor Author

@everettraven this PR continues #2461; it rebases that work and addresses your comments. Thanks!

@coderabbitai

coderabbitai Bot commented Mar 24, 2026

📝 Walkthrough

Walkthrough

Added an optional kubeStateMetricsConfig field to ClusterMonitoringSpec and introduced the KubeStateMetricsConfig, KubeStateMetricsResourceLabels, KubeStateMetricsResourceName, and KubeStateMetricsLabelName types plus related constants. Updated autogenerated deepcopy and Swagger documentation for the new types. Extended the ClusterMonitoring CRD schema with validation for nodeSelector, resources, tolerations, topologySpreadConstraints, and additionalResourceLabels. Added new YAML tests exercising many accept/reject cases for spec.kubeStateMetricsConfig.


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (2 inconclusive)

  • Stable And Deterministic Test Names (❓ Inconclusive): The test file referenced in the PR summary could not be located in the repository, making it impossible to verify whether test names are stable and deterministic. Resolution: Provide the actual test file location or content to assess whether Ginkgo test names contain dynamic information like timestamps or UUIDs.
  • Test Structure And Quality (❓ Inconclusive): Repository does not contain Ginkgo test files; PR adds CRD schema types and validation cases in YAML, not Go-based Ginkgo tests. Resolution: Verify if this repository uses Ginkgo tests or if the PR includes YAML-based CRD validation tests requiring different quality assessment standards.
✅ Passed checks (10 passed)
  • Title check (✅ Passed): The title accurately summarizes the main change: adding KubeStateMetricsConfig to the ClusterMonitoring API, which is the primary purpose of this changeset.
  • Description check (✅ Passed): The description is directly related to the changeset, mentioning the KubeStateMetricsConfig struct addition and configuration options (node selectors, resources, tolerations, topology spread constraints) with integration tests.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Microshift Test Compatibility (✅ Passed): No Ginkgo e2e tests were added in this PR. The check only applies when new Ginkgo e2e tests are present.
  • Single Node Openshift (Sno) Test Compatibility (✅ Passed): The test additions are YAML-based CRD schema validation fixtures, not Ginkgo e2e tests, so the SNO compatibility check is not applicable.
  • Topology-Aware Scheduling Compatibility (✅ Passed): PR adds optional configuration fields to the ClusterMonitoring CRD for kube-state-metrics scheduling without imposing topology-unaware defaults.
  • Ote Binary Stdout Contract (✅ Passed): PR adds only API type definitions, auto-generated methods, and declarative YAML test specs with no process-level code or stdout writes.
  • Ipv6 And Disconnected Network Test Compatibility (✅ Passed): PR does not add Ginkgo e2e tests; only Go type definitions, autogenerated code, and CRD schema updates.

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4)

Error: build linters: unable to load custom analyzer "kubeapilinter": tools/_output/bin/kube-api-linter.so, plugin: not implemented
The command is terminated due to an error: build linters: unable to load custom analyzer "kubeapilinter": tools/_output/bin/kube-api-linter.so, plugin: not implemented


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1117-1234: The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: a91c322b-92fd-43a0-a96f-b2b4da86aa91

📥 Commits

Reviewing files that changed from the base of the PR and between 324a1bc and 02919b5.

⛔ Files ignored due to path filters (3)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**
📒 Files selected for processing (5)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • config/v1alpha1/zz_generated.deepcopy.go
  • config/v1alpha1/zz_generated.swagger_doc_generated.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment on lines +1117 to +1234
matchLabelKeys:
  description: |-
    MatchLabelKeys is a set of pod label keys to select the pods over which
    spreading will be calculated. The keys are used to lookup values from the
    incoming pod labels, those key-value labels are ANDed with labelSelector
    to select the group of existing pods over which spreading will be calculated
    for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.
    MatchLabelKeys cannot be set when LabelSelector isn't set.
    Keys that don't exist in the incoming pod labels will
    be ignored. A null or empty list means only match against labelSelector.

    This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).
  items:
    type: string
  type: array
  x-kubernetes-list-type: atomic
maxSkew:
  description: |-
    MaxSkew describes the degree to which pods may be unevenly distributed.
    When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
    between the number of matching pods in the target topology and the global minimum.
    The global minimum is the minimum number of matching pods in an eligible domain
    or zero if the number of eligible domains is less than MinDomains.
    For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
    labelSelector spread as 2/2/1:
    In this case, the global minimum is 1.
    | zone1 | zone2 | zone3 |
    |  P P  |  P P  |   P   |
    - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
      scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
      violate MaxSkew(1).
    - if MaxSkew is 2, incoming pod can be scheduled onto any zone.
    When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
    to topologies that satisfy it.
    It's a required field. Default value is 1 and 0 is not allowed.
  format: int32
  type: integer
minDomains:
  description: |-
    MinDomains indicates a minimum number of eligible domains.
    When the number of eligible domains with matching topology keys is less than minDomains,
    Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
    And when the number of eligible domains with matching topology keys equals or greater than minDomains,
    this value has no effect on scheduling.
    As a result, when the number of eligible domains is less than minDomains,
    scheduler won't schedule more than maxSkew Pods to those domains.
    If value is nil, the constraint behaves as if MinDomains is equal to 1.
    Valid values are integers greater than 0.
    When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

    For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
    labelSelector spread as 2/2/2:
    | zone1 | zone2 | zone3 |
    |  P P  |  P P  |  P P  |
    The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
    In this situation, new pod with the same labelSelector cannot be scheduled,
    because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
    it will violate MaxSkew.
  format: int32
  type: integer
nodeAffinityPolicy:
  description: |-
    NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
    when calculating pod topology spread skew. Options are:
    - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
    - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

    If this value is nil, the behavior is equivalent to the Honor policy.
  type: string
nodeTaintsPolicy:
  description: |-
    NodeTaintsPolicy indicates how we will treat node taints when calculating
    pod topology spread skew. Options are:
    - Honor: nodes without taints, along with tainted nodes for which the incoming pod
      has a toleration, are included.
    - Ignore: node taints are ignored. All nodes are included.

    If this value is nil, the behavior is equivalent to the Ignore policy.
  type: string
topologyKey:
  description: |-
    TopologyKey is the key of node labels. Nodes that have a label with this key
    and identical values are considered to be in the same topology.
    We consider each <key, value> as a "bucket", and try to put balanced number
    of pods into each bucket.
    We define a domain as a particular instance of a topology.
    Also, we define an eligible domain as a domain whose nodes meet the requirements of
    nodeAffinityPolicy and nodeTaintsPolicy.
    e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
    And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
    It's a required field.
  type: string
whenUnsatisfiable:
  description: |-
    WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
    the spread constraint.
    - DoNotSchedule (default) tells the scheduler not to schedule it.
    - ScheduleAnyway tells the scheduler to schedule the pod in any location,
      but giving higher precedence to topologies that would help reduce the
      skew.
    A constraint is considered "Unsatisfiable" for an incoming pod
    if and only if every possible node assignment for that pod would violate
    "MaxSkew" on some topology.
    For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
    labelSelector spread as 3/1/1:
    | zone1 | zone2 | zone3 |
    | P P P |   P   |   P   |
    If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
    to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
    MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
    won't make it *more* imbalanced.
    It's a required field.
  type: string
required:
- maxSkew
- topologyKey
- whenUnsatisfiable
type: object


⚠️ Potential issue | 🟠 Major

Enforce documented topology constraint invariants in the schema

The descriptions document hard constraints, but the schema does not enforce several of them (whenUnsatisfiable allowed values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and then fail later during reconciliation/pod creation.

🔧 Proposed schema validation additions
                         maxSkew:
                           description: |-
                             MaxSkew describes the degree to which pods may be unevenly distributed.
@@
                           format: int32
+                          minimum: 1
                           type: integer
                         minDomains:
                           description: |-
                             MinDomains indicates a minimum number of eligible domains.
@@
                           format: int32
+                          minimum: 1
                           type: integer
@@
                         whenUnsatisfiable:
                           description: |-
                             WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
@@
+                          enum:
+                          - DoNotSchedule
+                          - ScheduleAnyway
                           type: string
+                      x-kubernetes-validations:
+                      - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+                        rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+                      - message: matchLabelKeys cannot be set when labelSelector is not set
+                        rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
                       required:
                       - maxSkew
                       - topologyKey
                       - whenUnsatisfiable

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 1117 - 1234, The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.
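The intent of the proposed rules can be sketched outside the cluster. The following is an illustrative Python model of the validations suggested above (the field names mirror the CRD properties, but the `validate_topology_constraint` helper itself is hypothetical); the real enforcement would happen in the apiserver via OpenAPI `minimum`/`enum` and CEL `x-kubernetes-validations`:

```python
# Hypothetical model of the proposed schema/CEL checks for a topology
# spread constraint. A constraint is a dict mirroring the CRD fields;
# a missing key means the optional field is unset.

def validate_topology_constraint(c: dict) -> list[str]:
    errors = []
    # CEL rule 1: minDomains only applies when whenUnsatisfiable=DoNotSchedule.
    if "minDomains" in c and c.get("whenUnsatisfiable") != "DoNotSchedule":
        errors.append("minDomains can only be set when whenUnsatisfiable is DoNotSchedule")
    # CEL rule 2: matchLabelKeys is resolved against labelSelector, so it requires one.
    if "matchLabelKeys" in c and "labelSelector" not in c:
        errors.append("matchLabelKeys cannot be set when labelSelector is not set")
    # Schema bounds from the proposed patch: maxSkew >= 1, minDomains >= 1.
    if c.get("maxSkew", 1) < 1:
        errors.append("maxSkew must be >= 1")
    if c.get("minDomains", 1) < 1:
        errors.append("minDomains must be >= 1")
    # Proposed enum on whenUnsatisfiable.
    if c.get("whenUnsatisfiable") not in ("DoNotSchedule", "ScheduleAnyway"):
        errors.append("whenUnsatisfiable must be DoNotSchedule or ScheduleAnyway")
    return errors

# A CR the descriptions call invalid, but which the current schema would accept:
bad = {"maxSkew": 1, "topologyKey": "topology.kubernetes.io/zone",
       "whenUnsatisfiable": "ScheduleAnyway", "minDomains": 3}
print(validate_topology_constraint(bad))
```

Running this on `bad` reports only the minDomains/whenUnsatisfiable violation, which is exactly the class of CR that currently passes admission and fails later during reconciliation.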

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 28, 2026
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 02919b5 to 361dc49 Compare April 7, 2026 13:37
@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 7, 2026
@danielmellado
Contributor Author

/hold until openshift/cluster-monitoring-operator#2553 merges

@openshift-ci openshift-ci Bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 7, 2026
@danielmellado
Contributor Author

/cc @rexagod

@openshift-ci openshift-ci Bot requested a review from rexagod April 7, 2026 13:38
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 361dc49 to d134762 Compare April 17, 2026 12:56
@danielmellado
Contributor Author

/unhold

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 17, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml (1)

1175-1292: ⚠️ Potential issue | 🟠 Major

Enforce documented topology spread invariants in schema (still missing).

This block documents constraints, but schema-level enforcement is still incomplete (whenUnsatisfiable values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and fail later.

🔧 Proposed schema validation patch
                         maxSkew:
@@
                           format: int32
+                          minimum: 1
                           type: integer
                         minDomains:
@@
                           format: int32
+                          minimum: 1
                           type: integer
@@
                         whenUnsatisfiable:
@@
+                          enum:
+                          - DoNotSchedule
+                          - ScheduleAnyway
                           type: string
                       required:
                       - maxSkew
                       - topologyKey
                       - whenUnsatisfiable
                       type: object
+                      x-kubernetes-validations:
+                      - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+                        rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+                      - message: matchLabelKeys cannot be set when labelSelector is not set
+                        rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 1175 - 1292, The CRD currently documents invariants but doesn't
enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable
to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add
minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be
0/negative and minDomains > 0, (3) add a conditional validation that when
minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add
validations requiring that matchLabelKeys cannot be present unless labelSelector
is set and that keys in matchLabelKeys do not overlap with labelSelector keys;
implement the conditional/complex checks using CRD CEL
(x-kubernetes-validations) or equivalent CRD validation mechanisms referencing
the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and
whenUnsatisfiable so invalid CRs are rejected at admission time.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 2145-2158: The test "Should reject KubeStateMetricsConfig with
duplicate resource in additionalResourceLabels" uses an invalid resource value
"jobs" which triggers enum validation before duplicate detection; update the two
entries under spec.kubeStateMetricsConfig.additionalResourceLabels to use a
valid resource (for example "pods" or another accepted enum value) so the YAML
passes resource validation and the duplicate-resource check is exercised by
ClusterMonitoring validation.

---

Duplicate comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1175-1292: The CRD currently documents invariants but doesn't
enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable
to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add
minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be
0/negative and minDomains > 0, (3) add a conditional validation that when
minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add
validations requiring that matchLabelKeys cannot be present unless labelSelector
is set and that keys in matchLabelKeys do not overlap with labelSelector keys;
implement the conditional/complex checks using CRD CEL
(x-kubernetes-validations) or equivalent CRD validation mechanisms referencing
the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and
whenUnsatisfiable so invalid CRs are rejected at admission time.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 5847c428-e700-46b3-84f5-f167574373e9

📥 Commits

Reviewing files that changed from the base of the PR and between 361dc49 and d134762.

⛔ Files ignored due to path filters (6)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
  • openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • config/v1alpha1/types_cluster_monitoring.go

Comment on lines +2145 to +2158
- name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
  initial: |
    apiVersion: config.openshift.io/v1alpha1
    kind: ClusterMonitoring
    spec:
      kubeStateMetricsConfig:
        additionalResourceLabels:
          - resource: jobs
            labels:
              - foo
          - resource: jobs
            labels:
              - bar
  expectedError: "Duplicate value"


⚠️ Potential issue | 🟡 Minor

Duplicate resource test uses invalid resource values.

This test intends to validate duplicate detection for additionalResourceLabels, but uses jobs which is not a valid resource value. The test may fail for the wrong reason (invalid enum) before reaching the duplicate validation. Use valid resource values to properly test duplicate detection.

🔧 Proposed fix
     - name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
       initial: |
         apiVersion: config.openshift.io/v1alpha1
         kind: ClusterMonitoring
         spec:
           kubeStateMetricsConfig:
             additionalResourceLabels:
-              - resource: jobs
+              - resource: nodes
                 labels:
                   - foo
-              - resource: jobs
+              - resource: nodes
                 labels:
                   - bar
       expectedError: "Duplicate value"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 2145 - 2158, The test "Should reject KubeStateMetricsConfig with
duplicate resource in additionalResourceLabels" uses an invalid resource value
"jobs" which triggers enum validation before duplicate detection; update the two
entries under spec.kubeStateMetricsConfig.additionalResourceLabels to use a
valid resource (for example "pods" or another accepted enum value) so the YAML
passes resource validation and the duplicate-resource check is exercised by
ClusterMonitoring validation.
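The "Duplicate value" behavior under discussion comes from the list being keyed by resource (a map-type list keyed on the resource field), so two entries with the same resource are rejected regardless of their labels. A minimal Python sketch of that uniqueness check (illustrative only, not the apiserver's implementation):

```python
# Model of the listMap uniqueness check the test exercises: entries in
# additionalResourceLabels are keyed by "resource", so a repeated key is
# a duplicate even if the rest of the entry differs.

def duplicate_resources(entries: list[dict]) -> list[str]:
    seen, dups = set(), []
    for entry in entries:
        resource = entry["resource"]
        if resource in seen:
            dups.append(resource)
        seen.add(resource)
    return dups

print(duplicate_resources([
    {"resource": "nodes", "labels": ["foo"]},
    {"resource": "nodes", "labels": ["bar"]},
]))
```

The point of the review comment is ordering: if "jobs" fails enum validation first, this duplicate check is never reached, so the test must use a value that passes the enum.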

@rexagod
Member

rexagod commented Apr 22, 2026

@danielmellado could you please rebase this? Thanks again for the patch! :)

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 22, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 22, 2026

PR needs rebase.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado <dmellado@redhat.com>
Signed-off-by: Daniel Mellado <dmellado@fedoraproject.org>
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from d134762 to d1719c1 Compare April 23, 2026 10:21

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)

2231-2506: Add explicit negative tests for nodeSelector and tolerations in kubeStateMetricsConfig.

nodeSelector and tolerations are only exercised in the all-fields happy path (Line 2237 and Line 2243). Unlike sibling configs in this file, there are no reject cases for empty/oversized values, so regressions in those validations could slip through.

🧪 Suggested test additions
+    - name: Should reject KubeStateMetricsConfig with empty nodeSelector
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            nodeSelector: {}
+      expectedError: 'spec.kubeStateMetricsConfig.nodeSelector: Invalid value: 0: spec.kubeStateMetricsConfig.nodeSelector in body should have at least 1 properties'
+
+    - name: Should reject KubeStateMetricsConfig with more than 10 nodeSelector entries
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            nodeSelector:
+              key1: val1
+              key2: val2
+              key3: val3
+              key4: val4
+              key5: val5
+              key6: val6
+              key7: val7
+              key8: val8
+              key9: val9
+              key10: val10
+              key11: val11
+      expectedError: 'spec.kubeStateMetricsConfig.nodeSelector: Too many: 11: must have at most 10 items'
+
+    - name: Should reject KubeStateMetricsConfig with empty tolerations array
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            tolerations: []
+      expectedError: 'spec.kubeStateMetricsConfig.tolerations: Invalid value: 0: spec.kubeStateMetricsConfig.tolerations in body should have at least 1 items'
+
+    - name: Should reject KubeStateMetricsConfig with more than 10 tolerations
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            tolerations:
+              - key: "key1"
+                operator: "Exists"
+              - key: "key2"
+                operator: "Exists"
+              - key: "key3"
+                operator: "Exists"
+              - key: "key4"
+                operator: "Exists"
+              - key: "key5"
+                operator: "Exists"
+              - key: "key6"
+                operator: "Exists"
+              - key: "key7"
+                operator: "Exists"
+              - key: "key8"
+                operator: "Exists"
+              - key: "key9"
+                operator: "Exists"
+              - key: "key10"
+                operator: "Exists"
+              - key: "key11"
+                operator: "Exists"
+      expectedError: 'spec.kubeStateMetricsConfig.tolerations: Too many: 11: must have at most 10 items'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 2231 - 2506, The test suite lacks negative cases for
kubeStateMetricsConfig.nodeSelector and kubeStateMetricsConfig.tolerations; add
reject tests that mirror existing patterns: (1) a case where
kubeStateMetricsConfig.nodeSelector is {} and assert an expectedError about
requiring at least one property; (2) a case where
kubeStateMetricsConfig.nodeSelector has an empty key or invalid value (if schema
enforces non-empty strings); (3) a case where kubeStateMetricsConfig.tolerations
is [] and assert an expectedError about requiring at least one item; and (4) a
case with too many tolerations (or duplicate tolerations) to assert the "Too
many" or "Duplicate value" errors; reference the kubeStateMetricsConfig,
nodeSelector, and tolerations keys when adding these new test entries consistent
with the surrounding test format.
config/v1alpha1/types_cluster_monitoring.go (1)

2502-2568: Consider extracting the shared scheduling/resources shape.

This block re-declares the same nodeSelector / resources / tolerations / topologySpreadConstraints shape already present in OpenShiftStateMetricsConfig. Pulling the common part into a shared struct would reduce drift in docs and validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/v1alpha1/types_cluster_monitoring.go` around lines 2502 - 2568,
KubeStateMetricsConfig duplicates the scheduling/resources fields from
OpenShiftStateMetricsConfig; extract a shared struct (e.g.,
PodSchedulingAndResources) containing NodeSelector, Resources, Tolerations, and
TopologySpreadConstraints and replace those fields in both
KubeStateMetricsConfig and OpenShiftStateMetricsConfig with a single
embedded/shared field of that new struct; update kubebuilder tags and JSON names
on the new struct to preserve validation (Min/MaxItems, listType/listMapKey,
optional, etc.) and adjust any references to the original fields in
code/comments to use the new struct.
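A minimal sketch of the extraction this comment suggests. The struct name PodSchedulingAndResources comes from the prompt above; the stand-in types replace the corev1 types the real fields would use (corev1.ResourceRequirements, corev1.Toleration, corev1.TopologySpreadConstraint) so the sketch is self-contained, and the Min/Max limits mirror the ones exercised in the YAML tests — all of this is illustrative, not the actual openshift/api code:

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 types the real fields reference.
type ResourceRequirements struct{ CPU, Memory string }
type Toleration struct{ Key, Operator string }
type TopologySpreadConstraint struct{ TopologyKey string }

// PodSchedulingAndResources is the hypothetical shared struct; the
// kubebuilder markers move here from the two duplicated declarations.
type PodSchedulingAndResources struct {
	// +optional
	// +kubebuilder:validation:MinProperties=1
	// +kubebuilder:validation:MaxProperties=10
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`
	// +optional
	Resources *ResourceRequirements `json:"resources,omitempty"`
	// +optional
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:MaxItems=10
	Tolerations []Toleration `json:"tolerations,omitempty"`
	// +optional
	TopologySpreadConstraints []TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}

// Both configs then embed the shared shape, so docs and validation
// cannot drift between them.
type KubeStateMetricsConfig struct {
	PodSchedulingAndResources `json:",inline"`
}
type OpenShiftStateMetricsConfig struct {
	PodSchedulingAndResources `json:",inline"`
}

func main() {
	c := KubeStateMetricsConfig{PodSchedulingAndResources{
		NodeSelector: map[string]string{"kubernetes.io/os": "linux"},
	}}
	// Embedded fields are promoted, so callers keep the same access path.
	fmt.Println(c.NodeSelector["kubernetes.io/os"])
}
```

One caveat with inline embedding: the generated CRD schema keeps the fields at the same JSON level, so this refactor is API-compatible, but the field-level godoc would need to read sensibly for both parent configs.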

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: d4a84268-dde5-4503-a8ce-f4b5c2c51857

📥 Commits

Reviewing files that changed from the base of the PR and between d134762 and d1719c1.

⛔ Files ignored due to path filters (6)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
  • openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment on lines +2601 to +2627
// KubeStateMetricsLabelName is the name of a Kubernetes label to expose as a metric
// via kube-state-metrics. Use "*" to expose all labels for a resource.
// Must be at least 1 character and at most 256 characters in length.
// +kubebuilder:validation:MinLength=1
// +kubebuilder:validation:MaxLength=256
type KubeStateMetricsLabelName string

// KubeStateMetricsResourceLabels defines which Kubernetes labels to expose as metrics
// for a given resource type in kube-state-metrics.
type KubeStateMetricsResourceLabels struct {
// resource is the Kubernetes resource name whose labels should be exposed as metrics.
// Currently, only "jobs" and "cronjobs" are supported due to cardinality concerns.
// Valid values are "jobs" and "cronjobs".
// This field is required.
// +required
Resource KubeStateMetricsResourceName `json:"resource,omitempty"`
// labels is the list of Kubernetes label names to expose as metrics for this resource.
// Use "*" to expose all labels for the specified resource.
// This field is required.
// Each label name must be unique within this list.
// Minimum length for this list is 1.
// Maximum length for this list is 50.
// +required
// +kubebuilder:validation:MinItems=1
// +kubebuilder:validation:MaxItems=50
// +listType=set
Labels []KubeStateMetricsLabelName `json:"labels,omitempty"`

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

For Kubernetes CRD / kubebuilder validation, what is the recommended validation pattern or CEL format helper for a Kubernetes label key (optional DNS prefix + "/" + qualified name), and how should a special wildcard value "*" be handled?

💡 Result:

The recommended validation pattern for a Kubernetes label key in CRD/kubebuilder uses CEL with a custom expression that splits the key on '/' (at most 2 parts), validates the optional prefix as a DNS1123 subdomain using format.dns1123Subdomain, and validates the required name part (max 63 chars) against the label-name regex ^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$. The special wildcard value "*" is handled by explicitly allowing it with self == '*' || … . In a kubebuilder Go struct:

// +kubebuilder:validation:MaxLength=253
// +kubebuilder:validation:XValidation:rule="self == '*' || (size(parts := self.split('/')) == 1 || (size(parts) == 2 && format.dns1123Subdomain.validate(parts[0]).empty && size(parts[1]) <= 63 && parts[1].matches('^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$')))",message="must be a valid Kubernetes label key or *"
LabelKey string `json:"labelKey,omitempty"`

Equivalent CRD OpenAPIv3Schema:

properties:
  labelKey:
    type: string
    maxLength: 253
    x-kubernetes-validations:
      - rule: 'self == "*" || (size(parts := self.split("/")) == 1 || (size(parts) == 2 && !format.dns1123Subdomain.validate(parts[0]).hasValue && size(parts[1]) <= 63 && parts[1].matches("^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$")))'
        message: must be a valid Kubernetes label key (optional DNS prefix/qualified name, max 253 chars total) or the wildcard "*"

This uses the Kubernetes CEL format library for prefix validation (available since ~1.28), a regex for the name segment, and explicit wildcard handling. The 253-char total is enforced by the field's maxLength (the name segment's 63-char limit is checked separately). No built-in "labelKey" format exists; custom CEL is standard practice per controller-tools discussions.

🏁 Script executed:

cat -n config/v1alpha1/types_cluster_monitoring.go | sed -n '2595,2630p'

Repository: openshift/api

Length of output: 1936


Add CEL validation for Kubernetes label key format on KubeStateMetricsLabelName.

The type currently only constrains length, so invalid values like "bad label" (contains space) or "☃" (non-ASCII) would be admitted despite documentation describing this as a Kubernetes label name. Add CEL validation to enforce either the wildcard "*" or a valid Kubernetes label key format (optional DNS1123 prefix + "/" + qualified name matching ^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$). Also reduce MaxLength from 256 to 253 to match the Kubernetes DNS name limit. Without this, invalid additionalResourceLabels can be stored and only fail downstream.

Suggested validation:

// +kubebuilder:validation:MaxLength=253
// +kubebuilder:validation:XValidation:rule="self == '*' || (size(parts := self.split('/')) == 1 || (size(parts) == 2 && format.dns1123Subdomain.validate(parts[0]) == '' && size(parts[1]) <= 63 && parts[1].matches('^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$')))",message="must be a valid Kubernetes label key or *"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/v1alpha1/types_cluster_monitoring.go` around lines 2601 - 2627,
KubeStateMetricsLabelName currently only enforces length; add CEL validation and
tighten the max length to enforce valid Kubernetes label keys or the wildcard
"*". Update the type KubeStateMetricsLabelName to change the MaxLength from 256
to 253 and add a +kubebuilder:validation:XValidation CEL rule that allows either
self == "*" or a DNS1123 subdomain prefix + "/" + qualified name (or no prefix)
using split('/') and format.dns1123Subdomain.validate for the prefix and a regex
for the name (^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$), with an appropriate
message like "must be a valid Kubernetes label key or *"; keep the existing
MinLength and ensure the new validation applies to KubeStateMetricsLabelName so
Labels []KubeStateMetricsLabelName inherits it.
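To see concretely what the proposed rule admits and rejects, here is a small Go sketch that mirrors the suggested CEL logic in plain code. The DNS-1123 regex is a simplified stand-in for CEL's format.dns1123Subdomain, not the exact Kubernetes implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// qualifiedName mirrors the name-segment regex from the suggested CEL rule.
var qualifiedName = regexp.MustCompile(`^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$`)

// dns1123Subdomain is a simplified stand-in for format.dns1123Subdomain:
// lowercase alphanumeric labels joined by dots, '-' allowed inside a label.
var dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

// isValidLabelNameOrWildcard re-implements the suggested CEL rule in Go:
// either the wildcard "*", or an optional DNS-1123 prefix + "/" + a
// qualified name of at most 63 characters, at most 253 characters total.
func isValidLabelNameOrWildcard(s string) bool {
	if s == "*" {
		return true
	}
	if len(s) == 0 || len(s) > 253 {
		return false
	}
	parts := strings.Split(s, "/")
	switch len(parts) {
	case 1:
		return len(parts[0]) <= 63 && qualifiedName.MatchString(parts[0])
	case 2:
		return dns1123Subdomain.MatchString(parts[0]) &&
			len(parts[1]) <= 63 && qualifiedName.MatchString(parts[1])
	default:
		return false
	}
}

func main() {
	// The last two are the invalid examples called out in the review.
	for _, v := range []string{"*", "app", "example.com/app", "bad label", "☃"} {
		fmt.Printf("%q -> %v\n", v, isValidLabelNameOrWildcard(v))
	}
}
```

Under the length-only validation currently in the PR, "bad label" and "☃" would both be admitted; under the sketched rule they are rejected while prefixed keys like "example.com/app" and the wildcard still pass.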

@openshift-ci
Contributor

openshift-ci Bot commented Apr 23, 2026

@danielmellado: all tests passed!

Full PR test history. Your PR dashboard.

Details


// as metrics by kube-state-metrics. Currently, only "jobs" and "cronjobs" are supported
// due to cardinality concerns.
// Valid values are "jobs" and "cronjobs".
// +kubebuilder:validation:Enum=jobs;cronjobs

(nit) should be Jobs and CronJobs to follow k8s API conventions?


Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.

5 participants